Results 1 - 4 of 4
1.
30th International Conference on Computers in Education Conference, ICCE 2022 ; 1:89-94, 2022.
Article in English | Scopus | ID: covidwho-2288876

ABSTRACT

The global education sector has been deeply shaken by COVID-19 and forced to shift to an online teaching model. However, online learning lacks the face-to-face communication and interaction that are critical to high-quality teaching and learning. Research on engagement is a crucial part of solving this problem. Because engagement is time-series data that changes continuously, datasets used for engagement analysis require a preprocessing method that captures time-series engagement features. This research proposes a novel deep learning preprocessing method for improving engagement estimation, using time-series facial and body information to restore traditional classroom scenes in online learning environments. Such information includes head pose, mouth shape, eye movement, and body distance from the screen. We conducted a preliminary experiment on the DAiSEE dataset for engagement estimation. We applied a skipped moving average during data preprocessing to reduce the influence of extraction noise, and oversampled the low-engagement data to balance the engaged/unengaged classes. Since engagement is continuous and cannot be captured at a particular instant or from single images, temporal video classification generally performs better than static classifiers. We therefore adopted long short-term memory (LSTM) and quasi-recurrent neural network (QRNN) sequence models, achieving accuracies of 55.7% (LSTM) and 51.1% (QRNN) using the original key points extracted from OpenPose. Finally, the proposed optimized network structure achieved engagement estimation accuracies of 68.5% with the LSTM model and 64.2% with the QRNN model, 10% higher than the baseline on the DAiSEE dataset. © 30th International Conference on Computers in Education Conference, ICCE 2022 - Proceedings.
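
The smoothing-plus-sequence-model pipeline described in this abstract can be illustrated with a short sketch. The code below is an assumption for illustration only, not the authors' implementation: it smooths per-frame OpenPose-style key points with a moving average that skips frames where extraction failed, then feeds the smoothed sequence to an LSTM classifier. The feature dimension, window size, and number of engagement levels are hypothetical placeholder values.

    # Minimal sketch (not the authors' code): smooth OpenPose-style key-point
    # sequences with a skipped moving average, then classify engagement with
    # an LSTM. Feature dimension, window size, and class count are assumptions.
    import numpy as np
    import torch
    import torch.nn as nn

    def skipped_moving_average(seq, window=5):
        """Average each frame with its neighbours, skipping frames where
        key-point extraction failed (marked as NaN)."""
        seq = np.asarray(seq, dtype=np.float32)      # (frames, features)
        out = np.empty_like(seq)
        half = window // 2
        for t in range(len(seq)):
            chunk = seq[max(0, t - half): t + half + 1]
            out[t] = np.nanmean(chunk, axis=0)       # ignore missing frames
        return np.nan_to_num(out)

    class EngagementLSTM(nn.Module):
        """LSTM over per-frame facial/body features -> engagement level."""
        def __init__(self, n_features=28, hidden=64, n_levels=4):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_levels)

        def forward(self, x):                        # x: (batch, frames, features)
            _, (h_n, _) = self.lstm(x)
            return self.head(h_n[-1])                # logits per engagement level

    # Toy usage with random data standing in for extracted key points.
    frames = skipped_moving_average(np.random.rand(300, 28))
    model = EngagementLSTM()
    logits = model(torch.from_numpy(frames).unsqueeze(0))
    print(logits.shape)                              # torch.Size([1, 4])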

2.
2022 International Joint Conference - 17th International Joint Symposium on Artificial Intelligence and Natural Language Processing, iSAI-NLP 2022 and 3rd International Conference on Artificial Intelligence and Internet of Things, AIoT 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2191965

ABSTRACT

The impact of COVID-19 has led to job interviews shifting online. There is now a return to face-to-face interviews in important situations, such as the final interview. However, it is still difficult to practice face-to-face interviews, and there is a growing need to practice them alone or remotely. The problem with practicing interviews alone is that, with no listener in front of them, practitioners do not feel the nervousness of being watched and evaluated. In this paper, we aim to address these issues by using a small communication robot. We conduct experiments under six conditions: practicing alone, with a person face-to-face, with an autonomous robot, with a teleoperated robot, with an avatar remotely, and with a person remotely. We then examine how each practice style influences factors such as the practitioner's nervousness. The results suggest that practicing with a person is most effective, whether face-to-face or remote, but that interview practice supported by a small communication robot is useful in the current social situation. © 2022 IEEE.

3.
16th International Joint Symposium on Artificial Intelligence and Natural Language Processing, iSAI-NLP 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1731021

ABSTRACT

The need for presentation skills is increasing year by year, and presentation lectures are being held in companies and universities. In particular, it is important to communicate interactively with the audience during a presentation. Currently, due to the influence of COVID-19, there are more and more opportunities for hybrid presentations with both face-to-face and remote audiences. In a hybrid presentation, it is difficult to communicate with both audiences in the same way, because the presenter's awareness of face-to-face and remote audiences differs owing to differences in presence information. In this paper, we propose a method that supports the presenter's awareness of the remote audience by sending vibration notifications to the presenter during the presentation, in order to promote communication and support the improvement of presentation skills, and we confirm the usefulness of the method. © 2021 IEEE.

4.
2nd International Conference on Artificial Intelligence in HCI, AI-HCI 2021, Held as Part of the 23rd HCI International Conference, HCII 2021 ; 12797 LNAI:541-552, 2021.
Article in English | Scopus | ID: covidwho-1359844

ABSTRACT

In recent years, online learning has played an essential part in education, driven by the development of distance-learning technology and by COVID-19 control measures. In this context, engagement, a mental state that enhances the learning process, has come into the limelight. However, existing engagement datasets are small in scale and not suitable for time-series research in education. We propose an estimation method based on time-series face and body features captured by built-in PC cameras to improve engagement estimation on small, irregular in-the-wild datasets. We design upper-body features using the facial and body key points extracted from OpenPose. To reduce the influence of extraction noise from OpenPose, a moving average (the average value over a fixed period of the video) is used to process the training data. We then compose a time-series dataset of online tasks with 19 participants. The composed dataset retains participants' self-reports of their mental state and external observations confirming the different engagement levels during the answering process; the combined self-report and external-observation results were used as the engagement label. Finally, transfer learning was used to address the insufficient-data issue: we pre-trained a long short-term memory (LSTM) sequence deep learning model on a large dataset and transferred the trained model to reuse the learned feature extraction and retrain on our dataset. Our proposed method achieved 63.7% accuracy in experiments and could be applied to engagement estimation and detection in future work. © 2021, Springer Nature Switzerland AG.
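
The transfer-learning step described above can be sketched briefly. The snippet below is a minimal illustration under assumed settings, not the authors' released code: the feature dimension, hidden size, checkpoint name, and two-class labels are hypothetical. An LSTM encoder pre-trained on a larger engagement dataset is frozen, and only a new classification head is retrained on the small composed dataset.

    # Minimal transfer-learning sketch (assumed setup, not the authors' code):
    # freeze a pre-trained LSTM encoder and retrain only the classification head.
    import torch
    import torch.nn as nn

    class SequenceEncoder(nn.Module):
        def __init__(self, n_features=28, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)

        def forward(self, x):                    # x: (batch, frames, features)
            _, (h_n, _) = self.lstm(x)
            return h_n[-1]                       # (batch, hidden) sequence summary

    encoder = SequenceEncoder()
    # encoder.load_state_dict(torch.load("pretrained_encoder.pt"))  # hypothetical checkpoint

    for p in encoder.parameters():               # freeze the pre-trained features
        p.requires_grad = False

    head = nn.Linear(64, 2)                      # engaged vs. not engaged on the small dataset
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a toy batch.
    x = torch.randn(8, 300, 28)                  # 8 clips, 300 frames, 28 features
    y = torch.randint(0, 2, (8,))
    loss = criterion(head(encoder(x)), y)
    loss.backward()
    optimizer.step()
    print(float(loss))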
